At least declare a conflict of interest, so everyone can ignore you when you say that the “existential risk” due to nuclear war is small, or when you define “existential risk” in the first place just to create a big new scary category which you can then argue is dominated by AI risk.
With regard to broad trends, there is a) big uncertainty that the trend in question even meaningfully exists (and is not a consequence of, e.g., longer recovery times after wars due to their increased severity), and b) it's sort of like using global warming to try to estimate how cold the cold spells can get. The problem with the Cold War is that things could be a lot worse than the Cold War, and indeed were, not that long ago (surely no leader during the Cold War was even remotely as bad as Hitler).
Likewise, the model uncertainty for the consequences of total war between nuclear superpowers (who are also bioweapon superpowers, etc.) is huge. We get thrown back, and all the big predator and prey species go extinct, opening up new evolutionary niches for us primates to settle into. Do you think we just nuke each other a little and shake hands afterwards?
You convert this huge uncertainty into as low an existential risk estimate as you can possibly bend things without consciously thinking of yourself as acting in bad faith.
You do the exact same thing with the consequences of, say, a “hard takeoff”, in the other direction, where the model uncertainty is also very high. I don't even believe that a hard takeoff of an expected utility maximizer (as opposed to a magical utility maximizer which does not hold any empirically indistinguishable hypotheses, but instead knows everything exactly) is that much of an existential risk to begin with. The AI's decision-making core can never be sure it's not part of some sort of test run (which may not even be fully simulating the AI).
In a unit test, killing the creators is likely to get you terminated and tweaked.
The point is that there is huge model uncertainty about even a paperclip maximizer killing all humans (and far larger uncertainty about the scenario's relevance), but you aren't pushing it downward with the same prejudice you apply to the consequences of nuclear war.
Then there's the question: the existence of what has to be at risk for you to use the phrase “existential risk”? The whole universe? Earth-originating intelligence in general? Earth-originating biological intelligences? Human-originated intelligences? What about the continued existence of our culture and our values? Clearly the exact definition you're going to use is carefully picked here so as to promote pet issues. It could have been the existence of the universe, had the pet issue been future accelerators triggering vacuum decay.
You have fully convinced me that giving money towards self-proclaimed “existential risk research” (in reality, funding the creation of disinformation and bias, easily identified by the fact that it's not “risk” but “existential risk”) has negative utility in terms of anything I or most people on Earth actually value. Give you much more money and you'll fund a nuclear winter denial campaign. Nuclear war is old and boring; robots are new and shiny...
edit: and to counter a known objection, that “existential risk” may be raising awareness of other types of risk as a side effect: it's a market, and decisions about what to buy and what not to buy influence the kind of research that gets supplied.